Current Issue: July - September | Volume: 2013 | Issue Number: 3 | Articles: 5
A simple application program interface (API) for Java programs running on a wiki has been implemented experimentally. A Java program using the API can run on a wiki and save its data on the wiki. The system consists of PukiWiki, which is a popular wiki in Japan, a plug-in that starts up Java programs, and Java classes. A Java applet with default access privileges cannot save its data on the local host, so we have constructed an API that gives applets easy and unified data input and output at a remote host. We also combined the proposed API with the wiki system by introducing a wiki tag for starting Java applets. New types of applications are easy to introduce with the proposed API. Using this API, we have embedded programs such as a simple text editor, a simple music editor, a simple drawing program, and programming environments in a PukiWiki system....
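The abstract describes an applet-side API that delegates persistence to the hosting wiki instead of the (forbidden) local file system. The following is a minimal sketch of what such an API could look like; the class name WikiStorage, the method save, and the HTTP form parameters are illustrative assumptions made for this sketch, not the authors' actual interface.

// Hypothetical sketch: an applet saves its data on the remote wiki host
// via HTTP POST to a wiki plug-in endpoint. All names are assumptions.
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class WikiStorage {

    private final URL endpoint; // e.g. the wiki plug-in that accepts applet data

    public WikiStorage(URL endpoint) {
        this.endpoint = endpoint;
    }

    /** Sends the applet's data to the remote wiki host instead of the local disk. */
    public void save(String pageName, String data) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        byte[] body = ("page=" + URLEncoder.encode(pageName, "UTF-8")
                + "&data=" + URLEncoder.encode(data, "UTF-8"))
                .getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
            throw new IOException("Wiki rejected save: " + conn.getResponseCode());
        }
    }
}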
As a result of the proliferation of Web 2.0 style websites, mashup services have become increasingly popular in the web development community. While mashup services bring flexibility and speed in delivering valuable new services to consumers, the issue of accountability associated with the mashup practice remains largely ignored by the industry. Furthermore, recognizing the great benefits of mashup services, industry leaders are eagerly pushing these solutions into the enterprise arena. Although enterprise mashup services hold great promise for delivering a flexible SOA solution in a business context, the lack of accountability in current mashup solutions may render them ineffective in the enterprise environment. This paper defines accountability for mashup services, analyses the underlying issues in practice, and finally proposes a framework and ontology to model accountability. This model may then be used to develop effective accountability solutions for mashup environments. Compared to the traditional method of using QoS or SLA monitoring to address accountability requirements, our approach addresses more fundamental aspects of accountability specification to facilitate machine interpretability and thereby enable automation in monitoring....
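To make the idea of a machine-interpretable accountability model concrete, here is an illustrative-only data structure for an accountability assertion about one component in a mashup chain. The class and field names are assumptions invented for this sketch and do not reflect the paper's actual ontology.

// Illustrative sketch of a machine-readable accountability assertion:
// which service is held to account, for which obligation, with what
// monitored outcome and evidence. All names are assumptions.
import java.time.Instant;
import java.util.List;

public final class AccountabilityAssertion {

    public enum Outcome { FULFILLED, VIOLATED, UNKNOWN }

    private final String serviceId;       // the mashup component being held to account
    private final String obligation;      // e.g. "respond within 200 ms", "preserve provenance"
    private final Outcome outcome;        // result observed by a monitor
    private final List<String> evidence;  // references to logs or monitoring records
    private final Instant observedAt;

    public AccountabilityAssertion(String serviceId, String obligation,
                                   Outcome outcome, List<String> evidence,
                                   Instant observedAt) {
        this.serviceId = serviceId;
        this.obligation = obligation;
        this.outcome = outcome;
        this.evidence = List.copyOf(evidence);
        this.observedAt = observedAt;
    }

    /** A monitoring tool could raise an alert whenever an assertion reports a violation. */
    public boolean isViolation() {
        return outcome == Outcome.VIOLATED;
    }
}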
Software maintenance is an important activity in software development. Development methodologies such as object orientation have contributed to improving the maintainability of software. However, crosscutting concerns still pose challenges that affect the maintainability of OO software. In this paper, we discuss our case study assessing the extent of the maintainability improvement that can be achieved by employing aspect-oriented programming. Aspect-oriented programming (AOP) is a relatively new approach that emphasizes dealing with crosscutting concerns. To demonstrate the maintainability improvement, we refactored a COTS-based system known as OpenBravoPOS using AspectJ and compared its maintainability with that of the original OO version. We used both structural complexity and concern-level metrics. Our results show an improvement in maintainability in the AOP version of OpenBravoPOS....
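As a hedged illustration of the technique the abstract refers to, the annotation-style AspectJ aspect below pulls a crosscutting logging concern out of the core classes into a single module. The pointcut's package pattern (com.example.pos) is an assumption for illustration only, not the actual structure of the refactored OpenBravoPOS.

// Minimal annotation-style AspectJ sketch: logging as a modularized
// crosscutting concern instead of code scattered across core classes.
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.AfterThrowing;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import java.util.logging.Logger;

@Aspect
public class LoggingAspect {

    private static final Logger LOG = Logger.getLogger("pos");

    // Runs before every public method in the (assumed) point-of-sale packages.
    @Before("execution(public * com.example.pos..*(..))")
    public void logEntry(JoinPoint jp) {
        LOG.fine("Entering " + jp.getSignature().toShortString());
    }

    // Centralizes error reporting that would otherwise appear as repeated catch blocks.
    @AfterThrowing(pointcut = "execution(* com.example.pos..*(..))", throwing = "ex")
    public void logFailure(JoinPoint jp, Throwable ex) {
        LOG.warning(jp.getSignature().toShortString() + " failed: " + ex.getMessage());
    }
}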
Coordination models and languages are meant to provide abstractions and mechanisms to harness the space of interaction as one of the foremost sources of complexity in computational systems. Nature-inspired computing aims at understanding the mechanisms and patterns of complex natural systems in order to bring their most desirable features to computational systems. Thus, the promise of nature-inspired coordination models is to prove themselves fundamental in the design of complex computational systems, such as intelligent, knowledge-intensive, pervasive, adaptive, and self-organising ones. In this paper, we survey the most relevant nature-inspired coordination models in the literature, focussing in particular on tuple-based models, and foresee the most interesting research trends in the field....
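Since the survey focuses on tuple-based models, a deliberately minimal, in-memory sketch of Linda-style primitives (out, in, rd) is given below to make the notion of tuple-based coordination concrete. Matching tuples against templates and the programmable or probabilistic behaviour of nature-inspired models are omitted for brevity; this is an assumption-laden toy, not any surveyed system's API.

// Toy sketch of Linda-style tuple-space coordination primitives.
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TupleSpace {

    private final BlockingQueue<List<Object>> space = new LinkedBlockingQueue<>();

    /** out: a producer emits a tuple into the shared space. */
    public void out(List<Object> tuple) {
        space.add(tuple);
    }

    /** in: a consumer removes a tuple, blocking until one is available. */
    public List<Object> in() throws InterruptedException {
        return space.take();
    }

    /** rd: a consumer reads a tuple without removing it (null if the space is empty). */
    public List<Object> rd() {
        return space.peek();
    }
}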
The wide availability of high-performance computing systems, Grids, and Clouds has allowed scientists and engineers to implement increasingly complex applications that access and process large data repositories and run scientific experiments in silico on distributed computing platforms. Most of these applications are designed as workflows that include data analysis, scientific computation methods, and complex simulation techniques. Scientific applications require tools and high-level mechanisms for designing and executing complex workflows. For this reason, in the past years many efforts have been devoted to the development of distributed workflow management systems for scientific applications. This paper discusses basic concepts of scientific workflows and presents workflow system tools and frameworks used today for implementing applications in science and engineering on high-performance computers and distributed systems. In particular, the paper reports on a selection of workflow systems largely used for solving scientific problems and discusses some open issues and research challenges in the area....
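The core idea the abstract builds on, an application expressed as tasks plus explicit dependencies, can be illustrated with the toy executor below. Real scientific workflow systems add distribution, data staging, fault tolerance, and provenance; the class, task names, and sequential execution here are simplifying assumptions, and the dependency graph is assumed to be acyclic.

// Toy sketch of a workflow as a dependency graph of tasks, executed in
// an order that respects the dependencies (topological order).
import java.util.*;

public class TinyWorkflow {

    private final Map<String, Runnable> tasks = new LinkedHashMap<>();
    private final Map<String, List<String>> deps = new HashMap<>();

    public TinyWorkflow task(String name, Runnable body, String... dependsOn) {
        tasks.put(name, body);
        deps.put(name, Arrays.asList(dependsOn));
        return this;
    }

    /** Runs each task only after all of its dependencies have completed. */
    public void run() {
        Set<String> done = new HashSet<>();
        while (done.size() < tasks.size()) {
            for (Map.Entry<String, Runnable> e : tasks.entrySet()) {
                if (!done.contains(e.getKey()) && done.containsAll(deps.get(e.getKey()))) {
                    e.getValue().run();
                    done.add(e.getKey());
                }
            }
        }
    }

    public static void main(String[] args) {
        new TinyWorkflow()
                .task("fetch", () -> System.out.println("fetch input data"))
                .task("analyse", () -> System.out.println("run analysis"), "fetch")
                .task("simulate", () -> System.out.println("run simulation"), "fetch")
                .task("report", () -> System.out.println("aggregate results"), "analyse", "simulate")
                .run();
    }
}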